
    On the Processing and Analysis of Microtexts: From Normalization to Semantics

    This is an extended abstract of the conference presentation. [Abstract] User-generated content published on microblogging social platforms constitutes an invaluable source of information for diverse purposes: health surveillance, business intelligence, political analysis, etc. We present an overview of our work in the field of microtext processing, covering the entire pipeline: from input preprocessing to high-level text mining applications.
    Funding: Ministerio de Economía, Industria y Competitividad (FFI2014-51978-C2-1-R, FFI2014-51978-C2-2-R, TIN2017-85160-C2-1-R, TIN2017-85160-C2-2-R, BES-2015-073768); Xunta de Galicia (ED431D 2017/12).
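    The pipeline described above can be pictured as a chain of stages where normalization feeds a downstream application. The following toy sketch is purely illustrative; the lookup table and the sentiment function are invented stand-ins, not the authors' components:

    ```python
    # Purely illustrative pipeline: a toy normalization step feeding a toy
    # text mining application. Both function bodies are invented stand-ins.
    def normalize(tweet):
        table = {"u": "you", "r": "are", "gr8": "great"}  # toy variant table
        return " ".join(table.get(tok, tok) for tok in tweet.split())

    def analyze_sentiment(text):
        # stand-in for a high-level application at the end of the pipeline
        return "positive" if "great" in text else "neutral"

    print(analyze_sentiment(normalize("u r gr8")))  # -> positive
    ```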

    Shallow Recurrent Neural Network for Personality Recognition in Source Code

    [Abstract] Personality recognition in source code constitutes a novel task in the field of author profiling on written text. In this paper we describe our proposal for the PR-SOCO shared task at FIRE 2016, which is based on a shallow recurrent LSTM neural network that tries to predict five personality traits of the author given a source code fragment. Our preliminary results show that it should be possible to tackle the problem at hand with our approach, but also that there is still room for improvement through more complex network architectures and training processes.
    Funding: Ministerio de Economía y Competitividad (FFI2014-51978-C2-1-R, FFI2014-51978-C2-2-R).
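    As a rough illustration of the kind of model the abstract describes, the sketch below builds a single-layer ("shallow") LSTM that regresses five trait scores from a tokenized code fragment. It is not the system submitted to PR-SOCO; the vocabulary size and dimensions are invented for the example.

    ```python
    # A minimal sketch (not the authors' implementation) of a shallow LSTM
    # that regresses five personality trait scores from a tokenized source
    # code fragment. Vocabulary size and dimensions are illustrative.
    import torch
    import torch.nn as nn

    class TraitLSTM(nn.Module):
        def __init__(self, vocab_size=5000, embed_dim=64, hidden_dim=128):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)  # one layer: "shallow"
            self.out = nn.Linear(hidden_dim, 5)  # one score per Big Five trait

        def forward(self, token_ids):
            embedded = self.embed(token_ids)   # (batch, seq, embed_dim)
            _, (h_n, _) = self.lstm(embedded)  # final hidden state summarizes the fragment
            return self.out(h_n[-1])           # (batch, 5) trait predictions

    model = TraitLSTM()
    fragment = torch.randint(0, 5000, (1, 200))  # a dummy 200-token code fragment
    print(model(fragment))  # five predicted trait scores
    ```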

    Towards Robust Word Embeddings for Noisy Texts

    [Abstract] Research on word embeddings has mainly focused on improving their performance on standard corpora, disregarding the difficulties posed by noisy texts in the form of tweets and other types of non-standard writing from social media. In this work, we propose a simple extension to the skipgram model in which we introduce the concept of bridge-words, which are artificial words added to the model to strengthen the similarity between standard words and their noisy variants. Our new embeddings outperform baseline models on noisy texts on a wide range of evaluation tasks, both intrinsic and extrinsic, while retaining good performance on standard texts. To the best of our knowledge, this is the first explicit approach to dealing with these types of noisy texts at the word embedding level that goes beyond support for out-of-vocabulary words.
    Funding: Ministerio de Economía, Industria y Competitividad, MINECO (TIN2017-85160-C2-1-R, TIN2017-85160-C2-2-R); European Social Fund, ESF (BES-2015-073768); Xunta de Galicia (ED431D 2017/12, ED431B 2017/01, ED431C 2020/11, ED431G/0).
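    One way to picture the bridge-word idea is as corpus augmentation: artificial tokens shared by a standard word and its noisy variants are injected so that skipgram pulls their vectors together. The sketch below is an assumption-laden toy; the variant table, the injection probability, and the use of gensim are ours, not the paper's exact procedure:

    ```python
    # A hedged sketch of the bridge-word idea: augment the corpus with
    # artificial "bridge" tokens shared by a standard word and its noisy
    # variants, so skipgram training pulls their vectors together.
    import random
    from gensim.models import Word2Vec

    variants = {"you": ["u", "yu"], "are": ["r"], "great": ["gr8"]}  # toy table
    bridge = {w: f"<bridge_{w}>" for w in variants}  # one artificial token per standard word

    def augment(sentence, p=0.5):
        out = []
        for tok in sentence:
            out.append(tok)
            std = next((w for w, vs in variants.items() if tok == w or tok in vs), None)
            if std is not None and random.random() < p:
                out.append(bridge[std])  # bridge shares contexts with both spellings
        return out

    corpus = [["you", "are", "great"], ["u", "r", "gr8"], ["you", "r", "great"]]
    augmented = [augment(s) for s in corpus * 50]  # tiny toy corpus, repeated
    model = Word2Vec(augmented, vector_size=50, window=2, min_count=1, sg=1, epochs=20)
    print(model.wv.similarity("you", "u"))  # variants should now be closer
    ```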

    Spanish word segmentation through neural language models

    [Abstract] On social media platforms, special tokens such as hashtags and mentions abound, in which multiple words are written together without any spacing between them; e.g. #leapyear or @ryanreynoldsnet. Due to the way this kind of text is written, this word assembly phenomenon can appear together with its opposite, word segmentation, affecting any token of the text and making it more difficult to analyse. In this work we present an algorithmic approach that uses a language model (in our case, one based on neural networks) to solve the problem of word segmentation and assembly, in which we try to recover the standard spacing of words that have undergone one of these transformations by adding or deleting spaces where appropriate. The promising results indicate that, after further refinement of the language model, it will be possible to surpass the state of the art.
    Funding: This work was partially funded by the Spanish Ministerio de Economía y Competitividad through projects FFI2014-51978-C2-1-R and FFI2014-51978-C2-2-R, and by the Xunta de Galicia through the Oportunius programme.
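    To make the segmentation half of the task concrete, the following sketch recovers spacing by dynamic programming over split points, scoring each candidate word with a log-probability function. A tiny unigram table stands in for the neural language model used in the paper, and the assembly direction is omitted:

    ```python
    # Language-model-driven word segmentation as dynamic programming over
    # split points. `word_logprob` is a stand-in unigram table; the paper
    # uses a neural language model for scoring instead.
    import math
    from functools import lru_cache

    UNIGRAMS = {"leap": -3.0, "year": -2.5, "leapyear": -12.0, "a": -2.0}

    def word_logprob(word):
        return UNIGRAMS.get(word, -15.0 - len(word))  # heavy penalty for unknown words

    @lru_cache(maxsize=None)
    def segment(text):
        """Return (best_logprob, best_segmentation) for `text`."""
        if not text:
            return 0.0, []
        best = (-math.inf, [])
        for i in range(1, len(text) + 1):
            head, tail = text[:i], text[i:]
            tail_score, tail_words = segment(tail)
            score = word_logprob(head) + tail_score
            if score > best[0]:
                best = (score, [head] + tail_words)
        return best

    print(segment("leapyear")[1])  # -> ['leap', 'year']
    ```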

    On the performance of phonetic algorithms in microtext normalization

    [Abstract] User-generated content published on microblogging social networks constitutes a priceless source of information. However, microtexts usually deviate from the standard lexical and grammatical rules of the language, making their processing by traditional intelligent systems very difficult. In response, microtext normalization transforms those non-standard microtexts into standard, well-written texts as a preprocessing step, allowing traditional approaches to proceed with their usual processing. Given the importance of phonetic phenomena in non-standard text formation, an essential element of the knowledge base of a normalizer is the set of phonetic rules that encode these phenomena, which can be found in so-called phonetic algorithms. In this work we experiment with a wide range of phonetic algorithms for the English language. The aim of this study is to determine the best phonetic algorithms within the context of candidate generation for microtext normalization; that is, we intend to find those algorithms that, taking as input the non-standard terms to be normalized, produce the smallest possible sets of normalization candidates that still contain the corresponding target standard words. As we will show, the choice of phonetic algorithm depends heavily on the capabilities of the candidate selection mechanism usually found at the end of a microtext normalization pipeline: the faster it can make the right choices among large enough sets of candidates, the more we can sacrifice the precision of the phonetic algorithms in favour of coverage in order to increase the overall performance of the normalization system.
    Funding: Agencia Estatal de Investigación (TIN2017-85160-C2-1-R, TIN2017-85160-C2-2-R); Ministerio de Economía y Competitividad (FFI2014-51978-C2-1-R, FFI2014-51978-C2-2-R, BES-2015-073768); Xunta de Galicia (ED431D 2017/12, ED431B 2017/01, ED431D R2016/046).
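    As a concrete illustration of candidate generation, the sketch below indexes a small standard lexicon by phonetic code and retrieves every lexicon word that shares the code of a non-standard token. The six-word lexicon is invented; the phonetic algorithms come from the jellyfish package:

    ```python
    # Phonetic candidate generation: index a standard lexicon by phonetic
    # code, then retrieve as candidates all words whose code matches that
    # of the non-standard token. The tiny lexicon is illustrative.
    from collections import defaultdict
    import jellyfish

    LEXICON = ["night", "know", "knight", "note", "right", "write"]

    def build_index(words, encode):
        index = defaultdict(set)
        for w in words:
            index[encode(w)].add(w)
        return index

    def candidates(token, index, encode):
        return index.get(encode(token), set())

    metaphone_index = build_index(LEXICON, jellyfish.metaphone)
    print(candidates("nite", metaphone_index, jellyfish.metaphone))
    # e.g. {'night', 'knight', 'note'} -- a small set containing the target
    ```

    The precision/coverage trade-off the abstract mentions corresponds to swapping in a coarser or finer encoder (e.g. jellyfish.soundex versus jellyfish.metaphone): coarser codes yield larger candidate sets with higher coverage, shifting more work onto the candidate selection mechanism.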

    Meemi: A Simple Method for Post-processing and Integrating Cross-lingual Word Embeddings

    [Abstract] Word embeddings have become a standard resource in the toolset of any Natural Language Processing practitioner. While monolingual word embeddings encode information about words in the context of a particular language, cross-lingual embeddings define a multilingual space where word embeddings from two or more languages are integrated together. Current state-of-the-art approaches learn these embeddings by aligning two disjoint monolingual vector spaces through an orthogonal transformation which preserves the structure of the monolingual counterparts. In this work, we propose to apply an additional transformation after this initial alignment step, which aims to bring the vector representations of a given word and its translations closer to their average. Since this additional transformation is non-orthogonal, it also affects the structure of the monolingual spaces. We show that our approach improves both the integration of the monolingual spaces and the quality of the monolingual spaces themselves. Furthermore, because our transformation can be applied to an arbitrary number of languages, we are able to effectively obtain a truly multilingual space. The resulting (monolingual and multilingual) spaces show consistent gains over the current state-of-the-art in standard intrinsic tasks, namely dictionary induction and word similarity, as well as in extrinsic tasks such as cross-lingual hypernym discovery and cross-lingual natural language inference.
    Comment: 22 pages, 2 figures, 9 tables. Preprint submitted to Natural Language Engineering.
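    A simplified numerical sketch of the averaging step follows: given already-aligned vectors for dictionary translation pairs, an unconstrained least-squares map per language takes each word towards the average of its pair. The random matrices merely simulate two roughly aligned spaces; this is not the released Meemi code:

    ```python
    # Simplified sketch of the averaging idea: after an initial (orthogonal)
    # alignment, learn an unconstrained linear map per language that moves
    # each word's vector toward the average of the word and its translation.
    # X, Y hold aligned vectors of dictionary translation pairs, row by row.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 300))             # language 1, already aligned
    Y = X + 0.1 * rng.normal(size=(1000, 300))   # language 2, roughly aligned to X
    M = (X + Y) / 2.0                            # target: the average of each pair

    # Unconstrained least-squares maps (non-orthogonal, so they may also
    # reshape each monolingual space, as the abstract notes).
    W_x, _, _, _ = np.linalg.lstsq(X, M, rcond=None)
    W_y, _, _, _ = np.linalg.lstsq(Y, M, rcond=None)

    # Apply to the full vocabularies (here, just the dictionary rows).
    X_new, Y_new = X @ W_x, Y @ W_y
    print(np.linalg.norm(X_new - Y_new) / np.linalg.norm(X - Y))  # pairs move closer
    ```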